    Successive Refinement of Abstract Sources

    In successive refinement of information, the decoder progressively refines its representation of the source as it receives more encoded bits. The rate-distortion region of successive refinement describes the minimum rates required to attain the target distortions at each decoding stage. In this paper, we derive a parametric characterization of the rate-distortion region for successive refinement of abstract sources. Our characterization extends Csiszár's result to successive refinement, and generalizes a result by Tuncel and Rose, applicable to finite alphabet sources, to abstract sources. This characterization spawns a family of outer bounds to the rate-distortion region. It also enables an iterative algorithm for computing the rate-distortion region, which generalizes Blahut's algorithm to successive refinement. Finally, it leads to a new nonasymptotic converse bound. In all the scenarios where the dispersion is known, this bound is second-order optimal. In our proof technique, we avoid the Karush-Kuhn-Tucker conditions of optimality, and we use basic tools of probability theory. We leverage the Donsker-Varadhan lemma for the minimization of relative entropy on abstract probability spaces. Comment: Extended version of a paper presented at ISIT 201
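
    The iterative algorithm mentioned above generalizes Blahut's classical alternating minimization. As a point of reference, here is a minimal Python sketch of the classical single-stage Blahut iteration (parameter values illustrative, not from the paper), which traces one slope point of the rate-distortion curve for a finite-alphabet source; the paper's algorithm extends this idea to successive decoding stages.

        import numpy as np

        def blahut(p_x, D, beta, iters=500):
            # p_x: source pmf; D: |X| x |Xhat| distortion matrix;
            # beta: slope (Lagrange) parameter of the R(d) curve.
            q = np.full(D.shape[1], 1.0 / D.shape[1])    # output marginal
            A = np.exp(-beta * D)
            for _ in range(iters):
                W = q * A                                # P(xhat|x) ~ q(xhat) exp(-beta d)
                W /= W.sum(axis=1, keepdims=True)
                q = p_x @ W                              # re-estimate output marginal
            dist = float(np.sum(p_x[:, None] * W * D))
            rate = float(np.sum(p_x[:, None] * W * np.log(W / q)))  # nats
            return rate, dist

        # one slope point for a Bernoulli(0.3) source with Hamming distortion
        print(blahut(np.array([0.7, 0.3]), 1.0 - np.eye(2), beta=3.0))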

    Nonasymptotic noisy lossy source coding

    This paper presents new general nonasymptotic achievability and converse bounds and performs their dispersion analysis for the lossy compression problem in which the compressor observes the source through a noisy channel. While this problem is asymptotically equivalent to a noiseless lossy source coding problem with a modified distortion function, nonasymptotically there is a noticeable gap in how fast their minimum achievable coding rates approach the common rate-distortion function, as evidenced both by the refined asymptotic analysis (dispersion) and the numerical results. The size of the gap between the dispersions of the noisy problem and the asymptotically equivalent noiseless problem depends on the stochastic variability of the channel through which the compressor observes the source. Comment: IEEE Transactions on Information Theory, 201
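
    To make the modified-distortion reduction concrete, the following sketch (an assumed illustrative setup, not code from the paper) computes the surrogate distortion for a Bernoulli source observed through a binary symmetric channel: it is simply the posterior expected distortion given the noisy observation.

        # X ~ Bernoulli(p) observed through BSC(delta) as Z; Hamming distortion
        # on X. The equivalent noiseless problem compresses Z under
        # dbar(z, xhat) = E[ d(X, xhat) | Z = z ].
        p, delta = 0.2, 0.1                                   # illustrative values
        for z in (0, 1):
            pz1 = p * (1 - delta) if z == 1 else p * delta            # P(X=1, Z=z)
            pz0 = (1 - p) * delta if z == 1 else (1 - p) * (1 - delta)  # P(X=0, Z=z)
            post1 = pz1 / (pz1 + pz0)                         # P(X=1 | Z=z)
            for xhat in (0, 1):
                dbar = post1 if xhat == 0 else 1.0 - post1    # E[1{X != xhat} | Z=z]
                print(f"dbar({z},{xhat}) = {dbar:.4f}")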

    Optimal Causal Rate-Constrained Sampling for a Class of Continuous Markov Processes

    Consider the following communication scenario. An encoder observes a stochastic process and causally decides when and what to transmit about it, under a constraint on bits transmitted per second. A decoder uses the received codewords to causally estimate the process in real time. The encoder and the decoder are synchronized in time. We aim to find the optimal encoding and decoding policies that minimize the end-to-end estimation mean-square error under the rate constraint. For a class of continuous Markov processes satisfying regularity conditions, we show that the optimal encoding policy transmits a 1-bit codeword once the process innovation passes one of two thresholds. The optimal decoder noiselessly recovers the last sample from the 1-bit codewords and codeword-generating time stamps, and uses it as the running estimate of the current process, until the next codeword arrives. In particular, we show the optimal causal code for the Ornstein-Uhlenbeck process and calculate its distortion-rate function.
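
    A minimal simulation conveys the flavor of the threshold policy. In the sketch below, all process and threshold parameters are arbitrary illustrative choices; between samples the decoder-side estimate follows the noiseless Ornstein-Uhlenbeck dynamics, and a 1-bit codeword is emitted whenever the innovation crosses a symmetric threshold.

        import numpy as np

        rng = np.random.default_rng(0)
        theta, sigma = 1.0, 1.0                  # OU parameters (illustrative)
        dt, T, delta = 1e-3, 200.0, 0.8          # step, horizon, threshold (assumed)
        x, xhat, mse, bits = 0.0, 0.0, 0.0, 0
        for _ in range(int(T / dt)):
            x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            xhat += -theta * xhat * dt           # decoder runs the noiseless dynamics
            if abs(x - xhat) >= delta:           # innovation hits a threshold
                xhat = x                         # sign bit + timestamp pin down the sample
                bits += 1
            mse += (x - xhat) ** 2 * dt
        print(f"rate ~ {bits / T:.3f} bits/s, MSE ~ {mse / T:.4f}")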

    Lossy joint source-channel coding in the finite blocklength regime

    This paper finds new tight finite-blocklength bounds for the best achievable lossy joint source-channel code rate, and demonstrates that joint source-channel code design brings considerable performance advantage over a separate one in the non-asymptotic regime. A joint source-channel code maps a block of k source symbols onto a length-n channel codeword, and the fidelity of reproduction at the receiver end is measured by the probability ϵ that the distortion exceeds a given threshold d. For memoryless sources and channels, it is demonstrated that the parameters of the best joint source-channel code must satisfy nC − kR(d) ≈ √(nV + k𝒱(d)) Q⁻¹(ϵ), where C and V are the channel capacity and channel dispersion, respectively; R(d) and 𝒱(d) are the source rate-distortion and rate-dispersion functions; and Q is the standard Gaussian complementary cdf. Symbol-by-symbol (uncoded) transmission is known to achieve the Shannon limit when the source and channel satisfy a certain probabilistic matching condition. In this paper we show that even when this condition is not satisfied, symbol-by-symbol transmission is, in some cases, the best known strategy in the non-asymptotic regime.
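
    To see what this approximation predicts numerically, the following sketch (source/channel pair and all numbers chosen purely for illustration) solves the displayed relation for the number n of BSC uses needed to convey k binary source symbols within distortion d at excess probability ϵ.

        import numpy as np
        from scipy.stats import norm

        h = lambda a: -a * np.log2(a) - (1 - a) * np.log2(1 - a)   # binary entropy
        p, q, d, eps, k = 0.11, 0.05, 0.05, 1e-2, 1000             # illustrative
        C  = 1 - h(q)                                   # BSC(q) capacity, bits
        V  = q * (1 - q) * np.log2((1 - q) / q) ** 2    # channel dispersion
        Rd = h(p) - h(d)                                # BMS(p) R(d), valid for d < p
        Vd = p * (1 - p) * np.log2((1 - p) / p) ** 2    # source rate dispersion
        n = k * Rd / C                                  # asymptotic blocklength
        for _ in range(50):                             # fixed-point iteration
            n = (k * Rd + np.sqrt(n * V + k * Vd) * norm.isf(eps)) / C
        print(f"n ~ {n:.0f} channel uses (asymptotic limit {k * Rd / C:.0f})")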

    Data Compression with Low Distortion and Finite Blocklength

    This paper considers lossy source coding of n-dimensional memoryless sources and shows an explicit approximation to the minimum source coding rate required to sustain a probability of exceeding distortion d no greater than ϵ, which is simpler than known dispersion-based approximations. Our approach takes inspiration from the celebrated classical result stating that the Shannon lower bound to the rate-distortion function becomes tight in the limit d → 0. We formulate an abstract version of the Shannon lower bound that recovers both the classical Shannon lower bound and the rate-distortion function itself as special cases. Likewise, we show that a nonasymptotic version of the abstract Shannon lower bound recovers all previously known nonasymptotic converses. A necessary and sufficient condition for the Shannon lower bound to be attained exactly is presented. It is demonstrated that whenever that condition is met, the rate-dispersion function is given simply by the varentropy of the source. Remarkably, all finite alphabet sources with balanced distortion measures satisfy that condition in the range of low distortions. Most continuous sources violate that condition. Still, we show that lattice quantizers closely approach the nonasymptotic Shannon lower bound, provided that the source density is smooth enough and the distortion is low. This implies that fine multidimensional lattice coverings are nearly optimal in the rate-distortion sense even at finite n. The achievability proof technique is based on a new bound on the output entropy of lattice quantizers in terms of the differential entropy of the source, the lattice cell size, and a smoothness parameter of the source density. The technique avoids both the usual random coding argument and the simplifying assumption of the presence of a dither signal.
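
    For mean-square error distortion the Shannon lower bound specializes to the familiar form R(d) ≥ h(X) − ½ log(2πe d). The short sketch below (a standard special case, not code from the paper) checks that this bound meets the Gaussian rate-distortion function exactly.

        import numpy as np

        def slb_mse(h_nats, d):
            # Shannon lower bound for MSE distortion, nats per sample
            return h_nats - 0.5 * np.log(2 * np.pi * np.e * d)

        sigma2 = 1.0
        h_gauss = 0.5 * np.log(2 * np.pi * np.e * sigma2)   # h(X), X ~ N(0, sigma2)
        for d in (0.5, 0.1, 0.01):
            exact = 0.5 * np.log(sigma2 / d)                # Gaussian R(d), nats
            print(d, slb_mse(h_gauss, d), exact)            # SLB equals R(d) here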

    Data compression with low distortion and finite blocklength

    This paper considers lossy source coding of n-dimensional continuous memoryless sources with low mean-square error distortion and shows a simple, explicit approximation to the minimum source coding rate. More precisely, a nonasymptotic version of Shannon's lower bound is presented. Lattice quantizers are shown to approach that lower bound, provided that the source density is smooth enough and the distortion is low, which implies that fine multidimensional lattice coverings are nearly optimal in the rate-distortion sense even at finite n. The achievability proof technique avoids both the usual random coding argument and the simplifying assumption of the presence of a dither signal.
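
    In one dimension the lattice claim can be checked empirically with the integer lattice. The sketch below (an illustrative experiment, not the paper's construction) measures the output entropy of a uniform scalar quantizer applied to a Gaussian source and compares it with the Shannon lower bound; the gap settles near ½ log₂(2πe/12) ≈ 0.254 bits as the distortion shrinks.

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal(10**6)                   # Gaussian source samples
        for step in (0.5, 0.2, 0.1):
            q = np.round(x / step)                       # nearest point of step*Z
            d = np.mean((x - step * q) ** 2)             # MSE, ~ step**2 / 12
            _, counts = np.unique(q, return_counts=True)
            pmf = counts / counts.sum()
            H = -(pmf * np.log2(pmf)).sum()              # empirical output entropy
            slb = 0.5 * np.log2(1.0 / d)                 # Gaussian SLB = R(d), bits
            print(f"d={d:.5f}  H={H:.3f}  SLB={slb:.3f}  gap={H - slb:.3f}")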

    Symbol Error Rates of Maximum-Likelihood Detector: Convex/Concave Behavior and Applications

    Convexity/concavity properties of the symbol error rate (SER) of the maximum-likelihood detector operating in the AWGN channel (non-fading and fading) are studied. Generic conditions are identified under which the SER is a convex/concave function of the SNR. Universal bounds on the first and second derivatives of the SER are obtained, which hold for arbitrary constellations and are tight for some of them. Applications of the results are discussed, which include optimum power allocation in spatial multiplexing systems, optimum power/time sharing to decrease or increase (jamming problem) the error rate, and implications for fading channels. Comment: To appear in 2007 IEEE International Symposium on Information Theory (ISIT 2007), Nice, June 200
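
    For the simplest constellation the convexity statement can be verified directly. The sketch below (BPSK over non-fading AWGN, an illustrative special case) numerically evaluates the second derivative of the SER with respect to SNR and confirms that it stays positive.

        import numpy as np
        from scipy.special import erfc

        ser = lambda g: 0.5 * erfc(np.sqrt(g))          # BPSK SER, Q(sqrt(2*snr))
        g = np.linspace(0.1, 10.0, 2000)                # SNR grid (linear scale)
        d2 = np.gradient(np.gradient(ser(g), g), g)     # numerical 2nd derivative
        print("min d^2 SER / d snr^2 on grid:", d2.min())   # positive => convex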

    Fixed-length lossy compression in the finite blocklength regime

    This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(ϵ) is the inverse of the standard Gaussian complementary cdf.
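
    As a concrete reading of the approximation, the sketch below evaluates it for the Gaussian memoryless source with mean-square error distortion (numbers illustrative); for this source the rate dispersion takes the known closed form V(d) = 1/2 squared nats for all 0 < d < σ².

        import numpy as np
        from scipy.stats import norm

        s2, d, eps = 1.0, 0.25, 1e-2                    # illustrative values
        Rd = 0.5 * np.log2(s2 / d)                      # Gaussian R(d), bits
        Vd = 0.5 * np.log2(np.e) ** 2                   # 1/2 nat^2 converted to bits^2
        for n in (100, 1000, 10000):
            print(n, Rd + np.sqrt(Vd / n) * norm.isf(eps))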

    Joint source-channel coding with feedback

    This paper quantifies the fundamental limits of variable-length transmission of a general (possibly analog) source over a memoryless channel with noiseless feedback, under a distortion constraint. We consider excess distortion, average distortion and guaranteed distortion (d-semifaithful codes). In contrast to the asymptotic fundamental limit, a general conclusion is that allowing variable-length codes and feedback leads to a sizable improvement in the fundamental delay-distortion tradeoff. In addition, we investigate the minimum energy required to reproduce k source samples with a given fidelity after transmission over a memoryless Gaussian channel, and we show that the required minimum energy is reduced with feedback and an average (rather than maximal) power constraint. Comment: To appear in IEEE Transactions on Information Theory
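
    The energy question has a classical asymptotic benchmark against which finite-k results can be read: over an AWGN channel the minimum energy per information bit is N₀ ln 2 (about −1.59 dB). The sketch below (illustrative numbers, not the paper's nonasymptotic bounds) evaluates the resulting asymptotic energy floor for reproducing k Gaussian samples at MSE distortion d.

        import numpy as np

        # Classical limit, stated as a sanity check: capacity per unit energy
        # of AWGN is 1/N0 nats per unit energy, so conveying k*R(d) nats costs
        # at least about N0 * k * R(d) in energy.
        N0, s2, k, d = 1.0, 1.0, 1000, 0.1          # illustrative values
        Rd_nats = 0.5 * np.log(s2 / d)              # Gaussian R(d), nats/sample
        print("asymptotic min energy ~", N0 * k * Rd_nats, "(units of N0)")
        print("Eb/N0 floor:", 10 * np.log10(np.log(2)), "dB")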